Abstract:
We investigate how maximum-distance separable (MDS) coding can be incorporated into probabilistic caching to utilize limited storage space efficiently. In a user-centric clustered wireless network, each caching helper node probabilistically caches a segment of the MDS-coded sequence of each file. The segment size is optimized to maximize the cache hit probability or the successful file retrieval probability. We reveal that the best way of storing files is determined by whether the average amount of MDS-coded information stored for the requested file within a user's cluster exceeds the amount required for file retrieval. For cache hit probability maximization, if this condition is not fulfilled, we prove that storing the complete file with a low probability is optimal. Otherwise, storing either a segment as small as possible with a high probability or a complete file with a low probability, depending on the given environment, is shown to be desirable. We also analyze the successful retrieval probability, which accounts for both a cache hit event and successful transmissions from multiple caching helper nodes. Since the successful retrieval probability takes an intractable form, we develop theoretically driven algorithms with low search complexity to find the desirable segment size.
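As a rough illustration of the trade-off this abstract describes, the following Monte Carlo sketch (a toy model under stated assumptions, not the paper's analysis) estimates the cache hit probability when each of `n_helpers` helpers independently stores an MDS-coded segment covering a fraction `segment_frac` of a file with probability `cache_prob`; with MDS coding, retrieval succeeds once the collected coded fractions sum to at least one full file's worth of information. The function name and parameters are illustrative, not from the paper:

```python
import random

def cache_hit_probability(n_helpers, segment_frac, cache_prob,
                          trials=20000, seed=0):
    """Monte Carlo estimate of the cache hit probability: each helper
    independently caches a coded segment (fraction segment_frac of the
    file) with probability cache_prob; a hit occurs when the total
    collected coded fraction reaches 1 (one full file)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        collected = sum(segment_frac for _ in range(n_helpers)
                        if rng.random() < cache_prob)
        if collected >= 1.0:
            hits += 1
    return hits / trials
```

Holding the per-helper storage budget fixed (`segment_frac * cache_prob` constant) and sweeping `segment_frac` illustrates the dichotomy between storing small segments with high probability and complete files with low probability.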
Abstract:
Confronted with the ever-increasing amount of mobile data traffic generated by mobile devices, edge servers are expected to provide caching resources for users so as to enable low-latency content delivery and relieve the burden on the backhaul network. Considering the limited caching capacities of edge servers, it is essential to design an efficient content placement strategy to increase the data offloading ratio, i.e., the proportion of requested contents that are delivered via edge caching rather than backhaul links. In this paper, we propose a spatially cooperative caching strategy for a two-tier heterogeneous network consisting of edge servers and caching helpers, aiming to reduce the storage space consumed by duplicate contents on caching helpers and to increase the hit probability. We develop an analytical framework for the proposed spatially cooperative caching strategy using tools from stochastic geometry and maximize the hit probability by optimizing the caching probabilities of contents on edge servers and caching helpers. In particular, we derive the closed-form solution of the optimization problem for unit cache size. The simulation results agree with the analytical ones, and the optimal caching probabilities derived in this paper outperform existing caching strategies.
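The kind of optimization described above can be sketched with a standard water-filling solution. The exponential hit model `1 - exp(-m * q_f)`, where `m` is the mean number of caching nodes covering a user, is a common stochastic-geometry approximation and an assumption here, not the paper's exact expression; `optimal_caching_probs` and its parameters are hypothetical names:

```python
import math

def optimal_caching_probs(popularity, mean_helpers, capacity=1.0):
    """Maximize sum_f p_f * (1 - exp(-m * q_f)) subject to
    sum_f q_f <= capacity and 0 <= q_f <= 1.  The KKT conditions give
    q_f = clip(ln(p_f * m / mu) / m, 0, 1); the multiplier mu is found
    by bisection so the capacity constraint is met."""
    m = mean_helpers

    def q_of(mu):
        return [min(1.0, max(0.0, math.log(p * m / mu) / m))
                for p in popularity]

    lo, hi = 1e-12, max(popularity) * m   # at mu = hi, all q_f are 0
    for _ in range(200):                  # bisection on mu
        mid = (lo + hi) / 2
        if sum(q_of(mid)) > capacity:
            lo = mid                      # mu too small: probs too large
        else:
            hi = mid
    return q_of((lo + hi) / 2)
```

More popular contents receive larger caching probabilities, and less popular ones are cut off at zero, mirroring the qualitative shape of optimal probabilistic placement.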
Abstract:
This two-part paper investigates cache replacement schemes with the objective of developing a general model to unify the analysis of various replacement schemes and illustrate their features. To achieve this goal, we study the dynamic process of caching in the vector space and introduce the concept of the state transition field (STF) to model and characterize replacement schemes. In the first part of this work, we consider the case of time-invariant content popularity based on the independent reference model (IRM). In this case, we demonstrate that the resulting STFs are static, and each replacement scheme leads to a unique STF. The STF determines the expected trace of the dynamic change in the cache state distribution, as a result of content requests and replacements, from any initial point. Moreover, given the replacement scheme, the STF is determined only by the content popularity. Using four example schemes, including random replacement (RR) and least recently used (LRU), we show that the STF can be used to analyze replacement schemes, such as by finding their steady states, highlighting their differences, and revealing insights regarding the impact of knowledge of content popularity. Based on the above results, the STF is shown to be useful for characterizing and illustrating replacement schemes. Extensive numerical results are presented to demonstrate analytical STFs and STFs from simulations for the considered example replacement schemes.
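A minimal illustration of this style of analysis under the IRM: for random replacement (RR), the cache-state process is a Markov chain over the possible cache contents, and its stationary distribution (the steady state an STF analysis would identify) can be found by power iteration. This is a sketch for tiny catalogs only; `rr_steady_state` is a hypothetical helper, not the paper's notation:

```python
import itertools

def rr_steady_state(popularity, cache_size, iters=500):
    """Stationary cache-state distribution of random replacement (RR)
    under the IRM.  States are unordered cache contents; on a miss the
    requested item replaces a uniformly chosen cached item."""
    n = len(popularity)
    states = list(itertools.combinations(range(n), cache_size))
    idx = {s: i for i, s in enumerate(states)}
    T = [[0.0] * len(states) for _ in states]    # transition matrix
    for s in states:
        for j, pj in enumerate(popularity):
            if j in s:
                T[idx[s]][idx[s]] += pj          # hit: state unchanged
            else:
                for victim in s:                 # miss: evict uniformly
                    t = tuple(sorted((set(s) - {victim}) | {j}))
                    T[idx[s]][idx[t]] += pj / cache_size
    dist = [1.0 / len(states)] * len(states)     # power iteration
    for _ in range(iters):
        dist = [sum(dist[i] * T[i][k] for i in range(len(states)))
                for k in range(len(states))]
    return dict(zip(states, dist))
```

As expected, the state holding the most popular contents carries the largest stationary mass.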
Abstract:
This paper presents a new adaptive probabilistic cache algorithm (AProb) for modern caching networks. AProb is based on three main techniques: (1) dynamic probabilistic caching; (2) a ghost list; and (3) adaptive probing and protection. It enables caching systems to quickly adapt their cached data to dynamic content popularity without administrator intervention or synchronization. The criteria for this adjustment are based on hit events occurring in AProb's data structures. Using AProb, a caching system continuously adapts its caching probability and the ratio between the probing and protection partitions of its cache. AProb has constant time complexity and minimal space overhead. Extensive computer simulations, covering various network topologies and traffic traces, show that AProb improves the server-hit ratio, footprint distance, and caching time compared with several existing cache algorithms.
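A minimal sketch of the adaptation idea, assuming a much-simplified version of the three techniques (this is not the paper's exact algorithm or data structures): misses are admitted with probability `p`; keys evicted from the cache enter a ghost list that stores metadata only; a request hitting the ghost list signals that `p` is too low and raises it, while real cache hits slowly decay it:

```python
import random
from collections import OrderedDict

class AProbSketch:
    """Illustrative adaptive probabilistic cache (not the paper's AProb):
    probabilistic admission plus a ghost list driving the adaptation."""
    def __init__(self, size, p=0.5, step=0.05, seed=0):
        self.size, self.p, self.step = size, p, step
        self.cache = OrderedDict()   # keys in LRU order
        self.ghost = OrderedDict()   # recently evicted keys (metadata)
        self.rng = random.Random(seed)

    def request(self, key):
        if key in self.cache:                       # cache hit
            self.cache.move_to_end(key)
            self.p = max(0.01, self.p - self.step / 10)  # gentle decay
            return True
        if key in self.ghost:                       # ghost hit: probe harder
            del self.ghost[key]
            self.p = min(1.0, self.p + self.step)
        if self.rng.random() < self.p:              # probabilistic admission
            self.cache[key] = True
            if len(self.cache) > self.size:
                victim, _ = self.cache.popitem(last=False)
                self.ghost[victim] = True
                if len(self.ghost) > self.size:
                    self.ghost.popitem(last=False)
        return False
```

Every operation touches only dictionary lookups and end-of-list updates, which is how a constant-time-complexity design of this kind is achievable.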
Abstract:
Accurate timing prediction for real-time embedded software execution is becoming a problem due to the increasing complexity of computer architectures and the presence of mixed-criticality workloads. Probabilistic caches were proposed to bound Worst-Case Execution Time (WCET) estimates and help designers improve real-time embedded system resource use. Static Probabilistic Timing Analysis (SPTA) for probabilistic caches is nevertheless difficult to perform, because cache accesses depend on execution history, and the computational complexity of SPTA makes it intractable as the number of accesses increases. In this paper, we explore and improve SPTA for caches with the evict-on-miss random replacement policy using a state space modeling technique. A nonhomogeneous Markov model is employed for single-path programs in a discrete-time finite state space representation. To make this Markov model tractable, we limit the number of states and use an adaptive method for state modification. Experiments show that, compared to the state-of-the-art methodology, the proposed adaptive Markov chain approach provides better results at an occurrence probability of 10^-15: in terms of accuracy, the state-of-the-art SPTA results are more conservative by 11% on average. In terms of computation time, our approach is not significantly different from the state-of-the-art SPTA.
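For context, the classical per-access treatment that Markov-model approaches improve on fits in a few lines: under evict-on-miss random replacement with `N` ways, each intervening access is often approximated as evicting the reused block independently with probability `1/N`, giving a reuse after `k` intervening accesses a survival probability of about `(1 - 1/N)^k`. This independence approximation is known to be optimistic in general and is shown only as an illustrative baseline, not as the paper's method:

```python
def spta_hit_probabilities(trace, ways):
    """Per-access hit probabilities for an evict-on-miss random
    replacement cache with `ways` lines, under the independence
    approximation: each intervening access evicts the reused block
    with probability 1/ways."""
    last_seen = {}
    probs = []
    for i, block in enumerate(trace):
        if block in last_seen:
            k = i - last_seen[block] - 1   # intervening accesses
            probs.append((1 - 1 / ways) ** k)
        else:
            probs.append(0.0)              # first access: certain miss
        last_seen[block] = i
    return probs
```

A state-space Markov model replaces this per-access independence assumption with an explicit distribution over cache contents, which is why it can be tighter at very low exceedance probabilities.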
Abstract:
We show how randomized caches can be used in resource-poor partial-state routers to provide a fair share of bandwidth to short-lived flows, known as mice, in the presence of long-lived flows, known as elephants.
Abstract:
Even though probabilistic caching has high potential as a caching scheme for Content-Centric Networking (CCN) due to its simplicity and effectiveness, there has been little work on analyzing its logical behavior in single-cache and multi-cache contexts. In this work, we propose an analytical model to evaluate the performance of the probabilistic caching scheme with the Least-Recently Used (LRU) replacement policy. We first construct a Markov chain of a single cache based on the Independent Reference Model (IRM) and extend it to the case of a two-level cache hierarchy. We use the model to compute the local cache hit rate, a parameter indicating the performance of a single cache. Moreover, our model reveals some insight into cache diversity, an important factor contributing to the global cache hit rate of a network of caches. Results from computer simulations based on ndnSIM validate the accuracy of our proposed model.
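The scheme being modeled is easy to reproduce in simulation: under the IRM, each miss inserts the requested content into an LRU cache with probability `p_cache`, and the local hit rate is measured empirically. This is a validation-style sketch with hypothetical names, not the Markov-chain model itself:

```python
import random

def prob_lru_hit_rate(popularity, cache_size, p_cache,
                      n_requests=20000, seed=1):
    """IRM simulation of probabilistic caching with LRU replacement:
    each miss inserts the content with probability p_cache; returns
    the measured local hit rate."""
    rng = random.Random(seed)
    cum, s = [], 0.0
    for w in popularity:            # cumulative popularity for sampling
        s += w
        cum.append(s)
    cache = []                      # front = most recently used
    hits = 0
    for _ in range(n_requests):
        r = rng.random() * s
        x = next(i for i, c in enumerate(cum) if r <= c)
        if x in cache:
            hits += 1
            cache.remove(x)
            cache.insert(0, x)      # LRU promotion on hit
        elif rng.random() < p_cache:
            cache.insert(0, x)      # probabilistic admission on miss
            if len(cache) > cache_size:
                cache.pop()         # evict least recently used
    return hits / n_requests
```

Comparing such simulated hit rates against an analytical model is exactly the kind of validation the ndnSIM experiments perform.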
Abstract:
Data caching is widely known as an effective power-saving technique, in which mobile devices use local caches instead of original data placed on a server, in order to reduce the power consumption necessary for network accesses. In such data caching, a cache invalidation mechanism is important in preventing these devices from unintentionally accessing invalid data. In this paper, we propose a broadcast-based protocol for cache invalidation in a location-aware system. The proposed protocol is designed to reduce the access time required for obtaining the necessary invalidation reports through broadcast media and to avoid client-side sleep fragmentation while retrieving the reports. In the proposed protocol, a Bloom filter is used as the data structure of an invalidation report, in order to probabilistically check the invalidation of caches. Furthermore, we propose three broadcast scheduling methods intended to achieve flexible broadcasting structured by the Bloom filter: the fragmentation avoidance scheduling method (FASM), the metrics balancing scheduling method (MBSM), and the minimizing access time scheduling method (MASM). The broadcast schedule is arranged for consecutive accesses to geographically neighboring invalidation reports. The effectiveness of the proposed methods is evaluated by simulation. The results indicate that the MBSM and MASM produce high-performance schedules: compared to the FASM, the MBSM reduces the access time by 34% while the fragmentations on the resultant schedule increase by 40%, and the MASM reduces the access time by 40%, along with an 85% increase in the number of fragmentations.
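The Bloom-filter invalidation report works as follows: the ids of invalidated cache entries are inserted into a bit array via k hash functions; membership tests may yield false positives (a still-valid entry is needlessly refreshed, which is safe) but never false negatives (stale data is never served). A minimal sketch, with the parameter choices `m_bits` and `k_hashes` picked purely for illustration:

```python
import hashlib

class BloomFilter:
    """Bloom filter used as a compact invalidation report."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                     # bit array packed into an int

    def _positions(self, item):
        # Derive k bit positions by salting the item with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # All k bits set => "possibly invalidated"; any bit clear =>
        # "definitely not invalidated" (no false negatives).
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

This one-sided error is what makes the structure suitable for invalidation: a false positive only costs an extra refresh, never a correctness violation.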
Abstract:
Recently, some P2P systems have constructed small-world networks based on the small-world model so as to improve routing performance. In this paper, we propose a novel probabilistic cache scheme to construct a small-world network based on the small-world model and use it to improve CAN, yielding PCCAN (Probabilistic Cache-based CAN). PCCAN caches long-range contacts, using a worm-routing replacing mechanism and a probabilistic replacing strategy on the cache. The probabilistic cache scheme proves to be an efficient approach to modeling the small-world phenomenon. Experiments in both static and dynamic networks show that PCCAN converges to a steady state with the cache scheme, and that routing performance is significantly improved with low additional overhead compared with CAN.
Abstract:
Information-centric networking (ICN) emphasizes content retrieval without regard to the location of the actual producer. This novel networking paradigm makes content retrieval faster and less expensive by shifting data provisioning to content holders rather than content owners. Caching is the feature of ICN that makes content serving possible from any intermediate device, and efficient caching is one of the primary requirements for effective deployment of ICN. In this paper, a caching approach with balanced content distribution among network devices is proposed. The selection of contents to be cached is determined through universal content popularity computed using Zipf's law, and the dynamic change in the popularity of contents is also considered when making caching decisions. To balance cached content across the network, every router keeps track of its neighbors' cache status. Three parameters, the proportionate distance of the router from the client (p(d)), the router congestion (r(c)), and the cache status (c(s)), are considered when selecting a router for caching contents. The new caching approach is evaluated in a simulated environment using ndnSIM-2.0. Three state-of-the-art approaches, Leave Copy Everywhere (LCE), a centrality-measures-based algorithm (CMBA), and probability-based caching (probCache), are considered for comparison. The proposed caching method shows better performance than the other three protocols.
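The popularity and router-selection ingredients can be sketched as follows. The Zipf computation is standard; the rule for combining p(d), r(c), and c(s) is not given in the abstract, so the equal-weight linear score below, and the assumed directions (smaller distance and congestion are better, more free cache space is better), are hypothetical:

```python
def zipf_popularity(n_contents, alpha=0.8):
    """Zipf's law: popularity of the rank-i content is proportional
    to 1 / i**alpha; returns a normalized probability vector."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def router_score(p_d, r_c, c_s, w=(1/3, 1/3, 1/3)):
    """Hypothetical weighted score over the three normalized parameters
    from the abstract: proportionate distance p_d, router congestion
    r_c, and cache status c_s (fraction of free space).  Directions
    and equal weights are assumptions, not the paper's rule."""
    return w[0] * (1 - p_d) + w[1] * (1 - r_c) + w[2] * c_s
```

A router with the highest score among a content's on-path candidates would then be chosen as the caching location.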